
Loading and working with experimental data

import x_xy

import jax
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt

import mediapy as media
import vispy

# other backends are fine too, just not `jupyter_rfb`. You can install `pyqt6` using `pip install pyqt6`
vispy.use("pyqt6")

def show_video(sys, xs: x_xy.Transform) -> None:
    assert sys.dt == 0.01
    # only render every fourth to get a framerate of 25 fps
    frames = x_xy.render_frames(sys, [xs[i] for i in range(0, xs.shape(), 4)], size=(640, 480))
    # convert rgba to rgb
    frames = [frame[..., :3] for frame in frames]
    media.show_video(frames, fps=25)
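The every-fourth-frame subsampling in show_video maps the 100 Hz data (dt = 0.01 s) down to 25 fps. A minimal sketch of that index arithmetic, with an assumed 10 s dummy sequence in place of real frames:

```python
# Subsample a 100 Hz sequence to 25 fps by keeping every 4th sample.
dt = 0.01                                 # sampling period of the system (100 Hz)
target_fps = 25
step = int(round(1 / (dt * target_fps)))  # 4 samples per video frame

n_samples = 1000                          # e.g. 10 s of data (illustrative)
frame_indices = list(range(0, n_samples, step))

print(step)                # 4
print(len(frame_indices))  # 250 frames -> 10 s of video at 25 fps
```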

Experimental data and system definitions of the experimental setup are located in the sub-package exp_data.

from x_xy.subpkgs import exp_data

Multiple experimental trials will be made available. Right now only the trial with identifier S_06 is available.

exp_id = "S_06"
# let's use segment 5 as anchor
sys = exp_data.load_sys(exp_id, morph_yaml_key="seg5")

Let's first take a look at the system that was used in the experiments.

state = x_xy.State.create(sys)
# update the maximal coordinates
xs = x_xy.forward_kinematics(sys, state)[1].x.batch()
show_video(sys, xs)
Rendering frames..: 100%|██████████| 1/1 [00:00<00:00, 10.71it/s]

As you can see, a five-segment kinematic chain was moved, and for each segment IMU measurements and OMC ground truth are available.

Let's load this real (not simulated) IMU and OMC data.

# `canonical` is the identifier of the first motion pattern performed in this trial
# `shaking` is the identifier of the last motion pattern performed in this trial
data = exp_data.load_data(exp_id, motion_start="canonical", motion_stop="shaking")
data.keys()
dict_keys(['seg1', 'seg2', 'seg3', 'seg4', 'seg5'])
data["seg1"].keys()
dict_keys(['imu_flex', 'imu_rigid', 'pos', 'quat'])
data["seg1"]["imu_rigid"].keys()
dict_keys(['acc', 'gyr', 'mag'])
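The nested structure above can be inspected all at once. The following sketch uses a small dummy dict with the same nesting in place of the loaded data (T is 34300 in the real trial); leaf_shapes is an illustrative helper, not part of x_xy:

```python
import numpy as np

# Dummy stand-in mirroring the nesting of `data["seg1"]` shown above.
T = 100
dummy = {
    "seg1": {
        "imu_rigid": {
            "acc": np.zeros((T, 3)),
            "gyr": np.zeros((T, 3)),
            "mag": np.zeros((T, 3)),
        },
        "pos": np.zeros((T, 3)),
        "quat": np.zeros((T, 4)),
    }
}

def leaf_shapes(d, prefix=""):
    """Flatten a nested dict of arrays into {path: shape}."""
    out = {}
    for key, val in d.items():
        path = prefix + key
        if isinstance(val, dict):
            out.update(leaf_shapes(val, path + "/"))
        else:
            out[path] = val.shape
    return out

print(leaf_shapes(dummy))
```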

The quaternion quat is to be interpreted as the rotation from the segment frame to an arbitrary OMC inertial frame.

The position pos is to be interpreted as the position vector from the arbitrary OMC inertial frame to a specific marker on the respective segment (vector given in the OMC inertial frame).
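Such a quaternion can be applied to vectors with a standard quaternion rotation. A self-contained sketch using plain NumPy, assuming a w-first (w, x, y, z) unit-quaternion convention (quat_rotate is an illustrative helper, not an x_xy API):

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, xyz = q[0], q[1:]
    t = 2.0 * np.cross(xyz, v)
    return v + w * t + np.cross(xyz, t)

# A 90-degree rotation about the z-axis maps e_x to (approximately) e_y.
angle = np.pi / 2
q = np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])
v = np.array([1.0, 0.0, 0.0])
print(quat_rotate(q, v))  # ~ [0, 1, 0]
```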

Additionally, two IMUs are attached to each segment: one rigidly attached, one non-rigidly attached (via foam).

Also, how long is the trial?

data["seg1"]["pos"].shape
(34300, 3)

That's 343 seconds of data (34300 samples at 100 Hz).
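The duration follows directly from the number of samples and the system's sampling period:

```python
dt = 0.01          # sampling period of the system (100 Hz)
n_samples = 34300  # first axis of data["seg1"]["pos"]
duration_s = n_samples * dt
print(duration_s)  # 343.0 seconds
```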

Let's take a look at the motion of the whole trial.

To render it, we need maximal coordinates xs of all links in the system. Luckily, there exists a function inside sub-package sim2real that does this already (but please check out the source).

from x_xy.subpkgs import sim2real

xs = sim2real.xs_from_raw(sys, data)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[39], line 3
      1 from x_xy.subpkgs import sim2real
----> 3 xs = sim2real.xs_from_raw(sys, data)

File ~/Documents/PYTHON/x_xy_v2/x_xy/subpkgs/sim2real/sim2real.py:94, in xs_from_raw(sys, link_name_pos_rot, t1, t2, eps_frame, qinv)
     91     t = x_xy.algebra.transform_mul(t, x_xy.algebra.transform_inv(t_eps))
     92     xs.append(t)
---> 94 scan_sys(sys, f, "l", sys.link_names)
     96 # stack and permute such that time-axis is 0-th axis
     97 xs = xs[0].batch(*xs[1:])

File ~/Documents/PYTHON/x_xy_v2/x_xy/scan.py:55, in scan_sys(sys, f, in_types, reverse, *args)
     53 for link_idx in order:
     54     args_link = [arg[idx_map[t](link_idx)] for arg, t in zip(args, in_types)]
---> 55     y = f(y, idx_map, *args_link)
     56     ys.append(y)
     58 if reverse:

File ~/Documents/PYTHON/x_xy_v2/x_xy/subpkgs/sim2real/sim2real.py:85, in xs_from_raw.<locals>.f(_, __, link_name)
     83 def f(_, __, link_name: str):
     84     q, pos = (
---> 85         link_name_pos_rot[link_name]["quat"],
     86         link_name_pos_rot[link_name]["pos"],
     87     )
     88     if qinv:
     89         q = x_xy.maths.quat_inv(q)

KeyError: 'imu1'

Oops. What happened?

The raw data dict data does not contain the pos and quat information of the IMUs. After all, they didn't have any OMC markers attached to them.

So we can't render the IMU boxes, and we have to delete those IMU links from the system before rendering.

We can do this with the sys_composer sub-package.

from x_xy.subpkgs import sys_composer
sys.link_names
['seg5',
 'seg1',
 'imu1',
 'imu5',
 'seg2',
 'imu2',
 'seg3',
 'imu3',
 'seg4',
 'imu4']
sys_noimu = sys_composer.delete_subsystem(sys, [f"imu{i}" for i in range(1, 6)])

Let's try again

xs = sim2real.xs_from_raw(sys_noimu, data)
show_video(sys_noimu, xs)
Rendering frames..: 100%|██████████| 8575/8575 [00:43<00:00, 195.24it/s]

Perfect. This is a rendered animation of the real experimental motion that was performed. You can see that the spacing between segments is large.

This is because in our idealized system model joints have no spatial extent, but in reality they do. The entire setup is 3D printed, and the joints are several centimeters long.

The segments are 20 cm long.

We can use this experimental data to validate our simulation-based approaches, or to validate ML models that were trained on simulated data.
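For such a validation, a common metric is the rotation angle between predicted and ground-truth orientations. A minimal sketch in plain NumPy, assuming unit quaternions in w-first order along the last axis (quat_angle_error_deg is an illustrative helper, not part of x_xy):

```python
import numpy as np

def quat_angle_error_deg(q_pred, q_true):
    """Smallest rotation angle (degrees) between two arrays of unit
    quaternions, shape (N, 4); sign-invariant via the absolute dot product."""
    dots = np.abs(np.sum(q_pred * q_true, axis=-1)).clip(max=1.0)
    return np.degrees(2.0 * np.arccos(dots))

# Identical orientations -> zero error.
q = np.tile(np.array([1.0, 0.0, 0.0, 0.0]), (5, 1))
print(quat_angle_error_deg(q, q).mean())  # 0.0
```

Averaging this per-sample angle over the trial gives a single mean attitude error, which can be compared across models.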